Introduction

When to use a time series vs. a linear regression?

If you have a continuous target variable, then it is a regression problem. For instance, in flight trials we have the flight distance to predict, which is continuous; hence this becomes a regression problem. A time series problem arises when the data points are time dependent: each data point has an order and is typically related to the data points before and after it by some underlying process.

In turn, a time series regression is a statistical method for predicting a future response based on the response history (known as autoregressive dynamics) and the transfer of dynamics from relevant predictors. Time series regression is commonly used for modeling and forecasting economic, financial, and biological systems.

There are three concepts then to keep in mind:

  • Stationary: any trends or patterns in the time series are independent of time. (A linear regression is the equivalent of a time series at stationarity.) There are strong and weak forms of stationarity.

  • Non-Stationary: the mean or standard deviation of the data points has a trend that depends on time.

  • Autocorrelation: “memory”; the degree to which time series values in period (t) are related to time series values in periods (t+1, t+2, t+3, …). With this, we can test how long the influence of certain values lasts in a system.
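These concepts can be seen on simulated data. The sketch below (base R only; arima.sim() and acf() come from the built-in stats package) compares the lag-1 autocorrelation of an AR(1) process, which has “memory”, against white noise, which does not:

```r
set.seed(42)

# AR(1) process: each value depends on the previous one (phi = 0.8)
ar1   <- arima.sim(model = list(ar = 0.8), n = 500)
# White noise: no dependence between time points
noise <- rnorm(500)

# acf(...)$acf[2] is the lag-1 autocorrelation (index 1 is lag 0)
acf_ar1   <- acf(ar1,   plot = FALSE)$acf[2]
acf_noise <- acf(noise, plot = FALSE)$acf[2]

round(c(AR1 = acf_ar1, noise = acf_noise), 2)
# the AR(1) value sits near 0.8; the noise value sits near 0
```

The AR(1) series is stationary but autocorrelated; the noise series is stationary and uncorrelated.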

There are also three words and their definitions to remember:

  • variance = amplitude of the data points
  • mean = the central value of the data points
  • covariance = how pairs of data points in the series vary together across time
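In base R these quantities map directly onto mean(), var(), and cov(); a tiny worked example on made-up numbers (not from the bug data):

```r
x <- c(2, 4, 6, 8)

mean(x)            # central value: 5
var(x)             # spread of the points: 20/3, about 6.67
cov(x[-4], x[-1])  # lag-1 covariance of the series with itself: 4
```

The last line pairs each value with its successor, which is the same pairing the autocovariance function uses.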

Questions often asked about time series include:

  • Is there any trend, seasonality, or any outliers?
  • Is there a long-run cycle or period unrelated to seasonality factors?
  • Is there a constant variance over time?
  • Are there any abrupt changes to either the level of the series or the variance?

So how can you answer these questions through a time series regression and code? And where do you begin to test stationarity or visualize autocorrelation? And finally, how do you handle non-stationarity?

  1. First, begin by cleaning the data.

Cleaning Data and Sourcing Scripts

source_path = "~/Desktop/git_repositories/SBB-dispersal/avbernat_working_on/Rsrc/"

script_names = c("compare_models.R",
                 "regression_output.R",
                 "clean_morph_data.R",
                 "AICprobabilities.R")

for (script in script_names) { 
  path = paste0(source_path, script)
  source(path) 
}

source("~/Desktop/git_repositories/SBB-dispersal/avbernat_working_on/RTsrc/ts_auxiliary_funs.R")

Read the data

  1. Extract date information to prep it for conversion into a datetime object read by the xts() function.

  2. Merge close dates together in order to have data points that are as equally spaced as possible. The acf() function assumes the data are regularly spaced; it computes estimates of the autocovariance or autocorrelation function. We need this because, if there are patterns, it will tell us what the mathematical function of those patterns is.

data_list <- read_morph_data("data/allmorphology9.21.20.csv")
raw_data = data_list[[1]]
all_bugs = nrow(raw_data)
# data_long = data_list[[2]] # need to fix this 

# Remove individuals with torn wings first
raw_data$drop <- FALSE
for(row in 1:nrow(raw_data)){
    if(length(unlist(strsplit(strsplit(paste("test ", raw_data$notes[row], " test", sep=""), "torn")[[1]], "wing")))>2){
         #browser() 
         raw_data$drop[row] <- TRUE
         }
}
raw_data <- raw_data[raw_data$drop==FALSE,]
clean_bugs = nrow(raw_data)

cat("number of bugs with torn wings:", all_bugs - clean_bugs, "\n\n")
## number of bugs with torn wings: 141
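As an aside, the torn-wing filter above (splitting on "torn" and then "wing" and counting pieces) can be written more compactly with grepl(); this hypothetical rewrite flags the same rows, since the split yields more than two pieces exactly when both words appear in the notes:

```r
notes <- c("left wing torn", "torn right wing", "all good")

# flag rows whose notes mention both "torn" and "wing"
drop <- grepl("torn", notes) & grepl("wing", notes)
drop
# → TRUE TRUE FALSE
```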
# Datetime
raw_data$date <- paste(raw_data$month, raw_data$year, sep="/")
raw_data$datetime <- as.yearmon(raw_data$date, "%B/%Y")
raw_data$datetime <- as.factor(raw_data$datetime)
n_missing_dates = nrow(raw_data[is.na(raw_data$datetime),])

# merge May 2015 with April 2015 because very few bugs were collected in May 2015.
# then merge April 2013 with May 2013 to make the time datapoints more evenly distanced
raw_data$date[raw_data$date == "May/2015"] = "April/2015"
raw_data$date[raw_data$date == "May/2013"] = "April/2013"

# convert to yearmon object and then factor
raw_data$datetime <- as.yearmon(raw_data$date, "%B/%Y")
raw_data$datetime <- as.factor(raw_data$datetime)

cat("number of missing dates:", n_missing_dates, "\n\n")
## number of missing dates: 296
unique(raw_data$datetime)
##  [1] Apr 2013 Apr 2014 Apr 2015 Dec 2013 Aug 2017 Dec 2016 Sep 2018 May 2019
##  [9] Oct 2019 Feb 2020 <NA>    
## 10 Levels: Apr 2013 Dec 2013 Apr 2014 Apr 2015 Dec 2016 Aug 2017 ... Feb 2020

Looks like we have a measurement at least once per year, which is good. However, we only have 10 time points.

na.omit(unique(raw_data$datetime))
##  [1] Apr 2013 Apr 2014 Apr 2015 Dec 2013 Aug 2017 Dec 2016 Sep 2018 May 2019
##  [9] Oct 2019 Feb 2020
## attr(,"na.action")
## [1] 11
## attr(,"class")
## [1] omit
## 10 Levels: Apr 2013 Dec 2013 Apr 2014 Apr 2015 Dec 2016 Aug 2017 ... Feb 2020
# equation: +-2/sqrt(T) where T is the number of obs
n=length(na.omit(unique(raw_data$datetime)))
2/sqrt(n)
## [1] 0.6324555

This would make our time series a borderline-short time series rather than a long time series. It may make more sense to do a GLMM that accounts for temporal dependencies rather than try to have an autoregressive model conform to this data. However, doing a GLMM assumes that temporal dependencies are the same between any two time observations, whereas an autoregressive model will try to account for and define any decay.

I continued applying autoregressive models to the data to see how it would pan out. Long story short, we need more time points.

ts All Data

Wing Length

  1. Remove missing wing and datetime values.

  2. Compute the wing length averages and generate datetime objects using as.Date(). In order to be processed by the as.Date() function, all time objects need a date, month, and year.

  3. Use the xts and zoo R libraries to cleanly index the data by a formal time object (collection_time).

  4. Optional: include major events that occurred in the 2010s. These could be events that you find biologically significant to your questions, e.g. how did major hurricanes in Florida impact average soapberry bug wing length, wing2body ratios, and/or wing-morph frequencies across Florida?

Hurricane Source: https://en.wikipedia.org/wiki/List_of_Florida_hurricanes#2000–present

# remove NA dates
d = raw_data %>%
  filter(!is.na(wing), !is.na(datetime))

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing", cat_var="datetime", func="mean")
wing_avg = ts_cols[[1]]
date = ts_cols[[2]]

# events
FL_major_hurr = c("Sep 2017 10", "Oct 2018 10") # Irma (s), Michael (n)
hurr_dates = as.Date(FL_major_hurr, "%b %Y %d")

events <- xts(c("Irma (S. FL)", "Michael (N. FL)"), 
              hurr_dates)

  1. Look for temporal dependencies between consecutive years. This helps you get a quick idea of how strong time effects are.

plot_consecutive_yrs(wing_avg, "wing length (mm)") # not strong; the points are very dispersed

  1. Create xts-zoo object and plot the time series data. Add any events to the plot with addEventLines().

  2. Run the Augmented Dickey-Fuller Test to test for stationarity where the null hypothesis is non-stationarity and the alternative hypothesis is stationarity.

  3. Finally, use an ACF (autocorrelation function) and PACF (partial autocorrelation function) plot to identify temporal dependence in the data. Autocorrelation measures the linear relationship between lagged values of a time series.

\(R_s = Corr(x_t, x_{t+s})\) for lag \(s\).

There are also two horizontal, blue, dashed lines. These represent the significance threshold: only the spikes that exceed them are considered significant. For a white noise series, we expect 95% of the spikes in the ACF to lie within ±2/sqrt(T), where T is the length of the time series. In most ACF plots below, the threshold equals ±2/sqrt(10) ≈ ±0.63. If one or more large spikes fall outside these bounds, or if substantially more than 5% of spikes do, then the series is probably not white noise.

In other words, these plots describe how well the present value of the series is related to its past values. Since none of the spikes are significant here, there is no AR structure: a present value of the time series cannot be obtained from previous values of the same time series.
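The lag-1 definition above can be checked directly in base R: the value reported by acf() closely matches the correlation between the series and its one-step-lagged copy (they differ slightly because acf() normalizes by the full-series mean and variance). A sketch on a simulated random walk, which has strong “memory”:

```r
set.seed(1)
x <- cumsum(rnorm(200))   # a random walk: strong dependence between steps
n <- length(x)

r_acf    <- acf(x, plot = FALSE)$acf[2]  # lag-1 autocorrelation from acf()
r_manual <- cor(x[1:(n - 1)], x[2:n])    # Corr(x_t, x_{t+1}) computed by hand

c(r_acf, r_manual)   # nearly identical, both close to 1
```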

## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -2.5124, Lag order = 2, p-value = 0.3772
## alternative hypothesis: stationary

Interpretation:

Stationarity can be defined in precise mathematical terms, but for our purpose we mean a flat-looking series, without trend, with constant variance over time, a constant autocorrelation structure over time, and no periodic fluctuations (seasonality). The high p-value therefore means we do have non-stationarity. Similarly, the augmented Dickey-Fuller (ADF) statistic used in the test is a negative number; the more negative it is, the stronger the rejection of the null hypothesis. This one is not negative enough.

Finally, the lag length (lag order) is how many terms back down the AR process you test for serial correlation. Here the test was run with lag order 2, and the series is non-stationary.

  1. Detrending and Dedrifting Data. Detrending is removing a trend from a time series; a trend usually refers to a change in the mean over time, and the overall goal is to reach stationarity. When you detrend data, you remove an aspect of the data that you think is causing some kind of distortion. For example, you might detrend data that shows an overall increase in order to see subtrends; usually, these subtrends are seen as fluctuations on a time series graph. To remove a linear trend, differencing is roughly equivalent to applying a linear regression on time, i.e. “regressing out” the time covariate.

Differencing is when a new series is constructed where the value at the current time step is calculated as the difference between the original observation and the observation at the previous time step.
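Differencing can be checked by hand on a toy series (made-up numbers, base R's diff()): for a purely linear trend, the first difference equals the slope at every step, so the trend is removed completely.

```r
t <- 1:10
y <- 3 + 2 * t   # a series that is nothing but a linear trend (slope 2)

dy <- diff(y)    # y[t] - y[t-1]
dy
# → 2 2 2 2 2 2 2 2 2  (constant: the linear trend is gone)
```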

detrend(wing_mm) # this is stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$diff
## Dickey-Fuller = -3.7455, Lag order = 2, p-value = 0.03961
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Interpretation: This time series is stationary! And as a reminder, data that are just noise will be stationary.

  1. You can also take the log() of the time series in order to stabilize its variance if there is a trend in the variance. Stationarity was achieved without having to stabilize the variance, but you can still check:

dedrift(wing_mm) # this is stationary
## Warning in adf.test(dx$logv): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logv
## Dickey-Fuller = -6.4449, Lag order = 2, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

dedriftrend(wing_mm) # this is stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logdiff
## Dickey-Fuller = -4.2683, Lag order = 2, p-value = 0.0139
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")
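To see why log() stabilizes variance, consider a toy series that grows multiplicatively (made-up numbers): its raw increments grow with the level of the series, but the increments of its log are constant, equal to the growth rate.

```r
x <- 100 * 1.05^(0:10)  # 5% multiplicative growth

diff(x)[c(1, 10)]       # raw increments grow over time
diff(log(x))            # log increments are all log(1.05), about 0.0488
```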

Wing2body

# Get only bugs with long wings
data_long<-raw_data[raw_data$w_morph=="L",]

# Calculate wing2body ratio for bugs with long wings 
data_long$wing2body <- data_long$wing/as.numeric(data_long$body)
# remove NA dates
d = data_long %>%
  filter(!is.na(wing2body), !is.na(datetime))

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing2body", cat_var="datetime", func="mean")
ratio_avg = ts_cols[[1]]
ratio_avg = ratio_avg[!is.na(ratio_avg)]
date = ts_cols[[2]]
plot_consecutive_yrs(ratio_avg, "wing2body")

ratio = xts(ratio_avg, date)
colnames(ratio) <- "wing2body"
check_stationarity(ratio) # this is not stationary
## Warning in adf.test(dx[, 1]): p-value greater than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = 10.559, Lag order = 2, p-value = 0.99
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#dedrift(ratio) # this is not stationary
detrend(ratio) # this is stationary
## Warning in adf.test(dx$diff): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$diff
## Dickey-Fuller = -7.7839, Lag order = 1, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

dedriftrend(ratio) # this is stationary
## Warning in adf.test(dx$logdiff): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logdiff
## Dickey-Fuller = -7.6591, Lag order = 1, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Wing Morph Frequency

raw_data$wing_morph_binom <- NA
raw_data$wing_morph_binom[raw_data$w_morph=="S"]<-0
raw_data$wing_morph_binom[raw_data$w_morph=="L"]<-1
# remove NA dates and wing morph (S=0, L=1)
d = raw_data %>%
  filter(!is.na(wing_morph_binom), !is.na(datetime))

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing_morph_binom", cat_var="datetime", func="mean")
freq_avg = ts_cols[[1]]
date = ts_cols[[2]]
plot_consecutive_yrs(freq_avg, "morph freq") # essentially no temporal corr

morph = xts(freq_avg, date)
colnames(morph) <- "wing_morph_freq"
check_stationarity(morph) # this is non-stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -1.8526, Lag order = 2, p-value = 0.6285
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#detrend(morph) # this is not stationary
dedriftrend(morph) # this is not stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logdiff
## Dickey-Fuller = -2.0047, Lag order = 2, p-value = 0.5706
## alternative hypothesis: stationary

dedrift(morph) # this is stationary
## Warning in adf.test(dx$logv): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logv
## Dickey-Fuller = -27.979, Lag order = 2, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

ts Grouped by Sex

females = raw_data[raw_data$sex=="F",]
males = raw_data[raw_data$sex=="M",]

Females

Wing Length

# remove NA dates
d = females %>%
  filter(!is.na(wing), !is.na(datetime))

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing", cat_var="datetime", func="mean")
wing_avg = ts_cols[[1]]
date = ts_cols[[2]]
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -2.8815, Lag order = 2, p-value = 0.2366
## alternative hypothesis: stationary

Looks like there’s no linear trend, but there are changes in variation over time.

#detrend(wing_mmf) # this is not stationary
dedrift(wing_mmf) # this is stationary
## Warning in adf.test(dx$logv): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logv
## Dickey-Fuller = -5.4041, Lag order = 2, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Wing2body

# remove NA dates
d = data_long %>%
  filter(!is.na(wing2body), !is.na(datetime))

d = d[d$sex=="F",]

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing2body", cat_var="datetime", func="mean")
ratio_avg = ts_cols[[1]]
ratio_avg = ratio_avg[!is.na(ratio_avg)]
date = ts_cols[[2]]
ratiof = xts(ratio_avg, date)
colnames(ratiof) <- "wing2body"
check_stationarity(ratiof) # this is not stationary
## Warning in adf.test(dx[, 1]): p-value greater than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = 1.4005, Lag order = 2, p-value = 0.99
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#detrend(ratiof) # not stationary
#dedrift(ratiof) # not stationary
#dedriftrend(ratiof) # still not stationary
#addEventLines(events, pos=2, srt=90, col="red") # *still* not stationary

Will need to use another function, maybe a polynomial, to detrend.
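A hedged sketch of polynomial detrending with base R's lm() and poly() (toy numbers, not the bug data): fit a quadratic in time and keep the residuals. On a series that is exactly quadratic, the residuals collapse to numerically zero.

```r
t <- 1:20
y <- 1 + 2 * t + 0.3 * t^2   # made-up series with a quadratic trend

fit <- lm(y ~ poly(t, 2))    # regress on a degree-2 polynomial of time
detrended <- resid(fit)      # what is left after removing the trend

max(abs(detrended))          # ~0: the quadratic trend is fully removed
```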

Wing Morph Frequency

# remove NA dates and wing morph (S=0, L=1)
d = raw_data %>%
  filter(!is.na(wing_morph_binom), !is.na(datetime))

d = d[d$sex=="F",]

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing_morph_binom", cat_var="datetime", func="mean")
freq_avg = ts_cols[[1]]
date = ts_cols[[2]]
morphf = xts(freq_avg, date)
colnames(morphf) <- "wing_morph_freq"
check_stationarity(morphf) # not stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -0.77584, Lag order = 2, p-value = 0.952
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#detrend(morphf) # not stationary
#dedrift(morphf) # not stationary
#dedriftrend(morphf) # still not stationary
#addEventLines(events, pos=2, srt=90, col="red")

Will need to use another function here too.

Males

Wing Length

# remove NA dates
d = males %>%
  filter(!is.na(wing), !is.na(datetime))

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing", cat_var="datetime", func="mean")
wing_avg = ts_cols[[1]]
date = ts_cols[[2]]
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -0.83612, Lag order = 2, p-value = 0.9447
## alternative hypothesis: stationary

#dedrift(wing_mmm) # not stationary
detrend(wing_mmm) # stationary
## Warning in adf.test(dx$diff): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$diff
## Dickey-Fuller = -15.246, Lag order = 2, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

dedriftrend(wing_mm) # stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logdiff
## Dickey-Fuller = -4.2683, Lag order = 2, p-value = 0.0139
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Wing2body

# remove NA dates
d = data_long %>%
  filter(!is.na(wing2body), !is.na(datetime))

d = d[d$sex=="M",]

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing2body", cat_var="datetime", func="mean")
ratio_avg = ts_cols[[1]]
ratio_avg = ratio_avg[!is.na(ratio_avg)]
date = ts_cols[[2]]
ratiom = xts(ratio_avg, date)
colnames(ratiom) <- "wing2body"
check_stationarity(ratiom) # not stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -0.44044, Lag order = 2, p-value = 0.9776
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#dedrift(ratiom) not stationary
#dedriftrend(ratiom) # this is stationary
#addEventLines(events, pos=2, srt=90, col="red") # the same  as detrend
detrend(ratiom) # this is stationary
## Warning in adf.test(dx$diff): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$diff
## Dickey-Fuller = -7.954, Lag order = 1, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Wing Morph Frequency

# remove NA dates and wing morph (S=0, L=1)
d = raw_data %>%
  filter(!is.na(wing_morph_binom), !is.na(datetime))

d = d[d$sex=="M",]

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing_morph_binom", cat_var="datetime", func="mean")
freq_avg = ts_cols[[1]]
date = ts_cols[[2]]
morphm = xts(freq_avg, date)
colnames(morphm) <- "wing_morph_freq"
check_stationarity(morphm) # not stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -1.3425, Lag order = 2, p-value = 0.8229
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#detrend(morphm) # not stationary
#dedriftrend(morphm) # not stationary
dedrift(morphm) # close! But marginally not stationary still
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logv
## Dickey-Fuller = -3.5464, Lag order = 2, p-value = 0.05744
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

ts Grouped by Host Plant

BV = raw_data[raw_data$pophost == "C.corindum",]
GRT = raw_data[raw_data$pophost == "K.elegans",]

Balloon Vine

Wing Length

# remove NA dates
d = BV %>%
  filter(!is.na(wing), !is.na(datetime))

ts_cols = clean_for_ts(d, contin_var="wing", cat_var="datetime", func="mean")
wing_avg = ts_cols[[1]]
date = ts_cols[[2]]
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -1.2393, Lag order = 2, p-value = 0.8622
## alternative hypothesis: stationary

# detrend(wing_BV) # not stationary
#dedriftrend(wing_BV) # not stationary
dedrift(wing_BV) # stationary
## Warning in adf.test(dx$logv): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logv
## Dickey-Fuller = -7.4026, Lag order = 2, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Wing2body

# remove NA dates
d = data_long %>%
  filter(!is.na(wing2body), !is.na(datetime))

d = d[d$pophost=="C.corindum",]

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing2body", cat_var="datetime", func="mean")
ratio_avg = ts_cols[[1]]
ratio_avg = ratio_avg[!is.na(ratio_avg)]
date = ts_cols[[2]]
ratioBV = xts(ratio_avg, date)
colnames(ratioBV) <- "wing2body"
check_stationarity(ratioBV) # not stationary
## Warning in adf.test(dx[, 1]): p-value greater than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = 2.1246, Lag order = 2, p-value = 0.99
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#dedrift(ratioBV) # not stationary
detrend(ratioBV) # stationary
## Warning in adf.test(dx$diff): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$diff
## Dickey-Fuller = -15.92, Lag order = 1, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#dedriftrend(ratioBV) # stationary
#addEventLines(events, pos=2, srt=90, col="red") # same as detrend

Over time, the wing2body ratio of the balloon vine bugs is decreasing. However, the ACF and PACF spikes are not significant.

Wing Morph Frequency

# remove NA dates and wing morph (S=0, L=1)
d = raw_data %>%
  filter(!is.na(wing_morph_binom), !is.na(datetime))

d = d[d$pophost=="C.corindum",]

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing_morph_binom", cat_var="datetime", func="mean")
freq_avg = ts_cols[[1]]
date = ts_cols[[2]]
morphBV = xts(freq_avg, date)
colnames(morphBV) <- "wing_morph_freq"
check_stationarity(morphBV) # not stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -1.3716, Lag order = 2, p-value = 0.8118
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#detrend(morphBV) # not stationary
#dedrift(morphBV) # not stationary
#dedriftrend(morphBV) # not stationary

Will need to apply another function.

Golden Rain Tree

Wing Length

# remove NA dates
d = GRT %>%
  filter(!is.na(wing), !is.na(datetime))

ts_cols = clean_for_ts(d, contin_var="wing", cat_var="datetime", func="mean")
wing_avg = ts_cols[[1]]
date = ts_cols[[2]]
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -1.5249, Lag order = 2, p-value = 0.7534
## alternative hypothesis: stationary

detrend(wing_GRT) # stationary
## Warning in adf.test(dx$diff): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$diff
## Dickey-Fuller = -4.6245, Lag order = 2, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

dedrift(wing_GRT) # stationary
## Warning in adf.test(dx$logv): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logv
## Dickey-Fuller = -19.918, Lag order = 2, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Wing2body

# remove NA dates
d = data_long %>%
  filter(!is.na(wing2body), !is.na(datetime))

d = d[d$pophost=="K.elegans",]

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing2body", cat_var="datetime", func="mean")
ratio_avg = ts_cols[[1]]
ratio_avg = ratio_avg[!is.na(ratio_avg)]
date = ts_cols[[2]]
ratioGRT = xts(ratio_avg, date)
colnames(ratioGRT) <- "wing2body"
check_stationarity(ratioGRT) # not stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -0.29597, Lag order = 2, p-value = 0.9837
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

detrend(ratioGRT) # stationary
## Warning in adf.test(dx$diff): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$diff
## Dickey-Fuller = -5.9149, Lag order = 1, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

dedrift(ratioGRT) # stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logv
## Dickey-Fuller = -3.7529, Lag order = 1, p-value = 0.03908
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Wing Morph Frequency

# remove NA dates and wing morph (S=0, L=1)
d = raw_data %>%
  filter(!is.na(wing_morph_binom), !is.na(datetime))

d = d[d$pophost=="K.elegans",]

# prep dataset for xts()
ts_cols = clean_for_ts(d, contin_var="wing_morph_binom", cat_var="datetime", func="mean")
freq_avg = ts_cols[[1]]
date = ts_cols[[2]]
morphGRT = xts(freq_avg, date)
colnames(morphGRT) <- "wing_morph_freq"
check_stationarity(morphGRT) # not stationary
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx[, 1]
## Dickey-Fuller = -2.8379, Lag order = 2, p-value = 0.2532
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

#detrend(morphGRT) # not stationary
dedrift(morphGRT) # stationary
## Warning in adf.test(dx$logv): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  dx$logv
## Dickey-Fuller = -25.996, Lag order = 2, p-value = 0.01
## alternative hypothesis: stationary

addEventLines(events, pos=2, srt=90, col="red")

Citation

Rob J Hyndman and George Athanasopoulos. Forecasting: Principles and Practice. https://otexts.com/fpp3/.

Conclusion

We have very few points (10); ideally we would have 15-25 observations. Because of the few points, none of the spikes in the ACF or PACF are significant, even though the Augmented Dickey-Fuller test repeatedly indicates non-stationarity. In turn, this is a short time series that can only be evaluated for temporal dependencies in another modeling framework (e.g. GLMM, GAMM, or LOESS). Otherwise, under AR, these time series would all fall under white noise.

White Noise

In our limited datasets, there is no evidence of non-randomness because our autocorrelations were NOT statistically significant. There were instances where autocorrelations came close, but it would be ideal to have at least 15 observations to test them. Why? Because at 15 observations the significance threshold drops from 0.63 to 0.52, which could yield more significant spikes.
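That threshold claim follows directly from the ±2/sqrt(T) bound:

```r
2 / sqrt(10)   # current threshold: ~0.63
2 / sqrt(15)   # with 15 observations: ~0.52
2 / sqrt(25)   # with 25 observations: ~0.40
```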

In turn, our datasets contain white noise: variation in the data that cannot be explained by any regression model.

https://towardsdatascience.com/the-white-noise-model-1388dbd0a7d